Galaxy 6600 GT AGP 128MB review


Overclocking & Tweaking

Before we dive into a wide-ranging series of tests and benchmarks, we need to explain overclocking. With most videocards we can use a few easy tricks to boost overall performance a little. You can do this at two levels: tweaking, by enabling registry or BIOS hacks or even tampering with image quality, and overclocking, which by far will give you the best possible results.

What do we need?
One of the best tools for overclocking NVIDIA and ATI videocards is our own Rivatuner, which you can
download here. If you own an NVIDIA graphics card then NVIDIA actually has very nice built-in options for you that can be found in the display driver properties. They are hidden, though, and you'll need to enable them by installing a small registry hack called CoolBits, which you can download right here (after downloading and unpacking, just double-click the .reg file and confirm the import).
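For the curious, all that .reg import effectively does is set a single registry value. Below is a minimal Python sketch of the equivalent operation; the key path and value shown are the ones commonly associated with ForceWare drivers of this era and may differ per driver version, so treat it as an illustration rather than the downloadable file itself (it must run with administrator rights on Windows):

    import winreg  # Windows-only standard library module

    # Assumed key path/value for ForceWare-era CoolBits; may vary per driver.
    key = winreg.CreateKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\NVIDIA Corporation\Global\NVTweak",
    )
    # CoolBits=3 unlocks the hidden clock-frequency page in the driver panel.
    winreg.SetValueEx(key, "CoolBits", 0, winreg.REG_DWORD, 3)
    winreg.CloseKey(key)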

Where should we go?
Overclocking: by increasing the frequency of the videocard's memory and GPU, we can make the videocard perform more calculation clock cycles per second. It sounds hard, but it really can be done in less than a few minutes. I always recommend that novice users and beginners not increase the frequency by more than 5-10% on the core and memory clock. Example: if your card runs at 300 MHz, then I suggest you don't increase the frequency any higher than 330 MHz.
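To make that 5-10% guideline concrete, here is a tiny Python sketch (the helper name is ours, purely for illustration):

    def safe_overclock_ceiling(stock_mhz, headroom=0.10):
        """Conservative maximum clock for novices: stock plus 10%."""
        return stock_mhz * (1.0 + headroom)

    print(safe_overclock_ceiling(300))  # 330.0, matching the example above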

More advanced users often push the frequency way higher. Usually when your 3D graphics start to show artifacts such as white dots ("snow"), the memory is clocked too high; back down 10-15 MHz and leave it at that.

The core behaves somewhat differently. Usually when you overclock it too hard, it'll start to show artifacts or empty polygons, or it will even freeze. I recommend that you back down at least 15 MHz from the moment you notice an artifact. Look carefully and observe well.
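Putting the two paragraphs above together, the procedure is: step up, watch for artifacts, back off. A hypothetical Python sketch of that loop follows; set_core_clock() and shows_artifacts() are stand-ins for what you would really do by hand with Rivatuner and a 3D test:

    def set_core_clock(mhz):
        # Stand-in: in reality you drag the clock slider and apply.
        print(f"core clock set to {mhz} MHz")

    def shows_artifacts():
        # Stand-in: in reality you run a 3D scene and watch for snow,
        # empty polygons or freezes.
        return False

    def find_stable_core_clock(stock_mhz, step=5, backoff=15, limit=1.15):
        """Raise the core in small steps; on the first artifact, back off ~15 MHz."""
        clock = stock_mhz
        while clock + step <= stock_mhz * limit:
            clock += step
            set_core_clock(clock)
            if shows_artifacts():
                return clock - backoff
        return clock

    print(find_stable_core_clock(500))  # stops at the 15% safety ceiling here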

All in all... do it at your own risk.



Overclocking your card too far, or running it constantly at its maximum limit, might damage it, and such damage is usually not covered by your warranty.

You will benefit from overclocking the most with a product that is limited, or as you may call it, "tuned down." We know that this graphics core is often held back by clock frequency or bandwidth limitations, so by increasing the memory and core frequency we should be able to witness some higher performance results. A simple trick to get some more bang for your buck.

The GeForce 6600 GT AGP from Galaxy is already overclocked a little for you at default. The GPU runs at 525 MHz, and its DDR memory at (2x)525, thus 1050 MHz effective.
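For perspective, the 6600 GT has a 128-bit memory interface, so those clocks translate into memory bandwidth like this (a quick back-of-the-envelope calculation in Python):

    effective_mhz = 525 * 2              # DDR transfers twice per clock: 1050 MHz
    bus_width_bits = 128                 # 6600 GT memory interface width
    bandwidth = effective_mhz * 1e6 * bus_width_bits / 8 / 1e9
    print(f"{bandwidth:.1f} GB/s")       # ~16.8 GB/s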

We used the new 66.97 WHQL driver for this test. You can alter the default clock settings by using Rivatuner, which you can download here, or CoolBits, which you can download right here.

Unfortunately, due to an issue with our initial sample, we had to wait for a replacement before we could show you overclocking results. We received that sample yesterday, when this article was already finished. Therefore you will not see overclocking results included on the following pages, which is something we normally do. We did, however, run some tests with the new sample, and it is promising: it allows a core frequency of roughly 550 MHz and 1240 MHz on the memory. That of course results in a higher average framerate. Here are some results from Splinter Cell and AquaMark 3.

Splinter Cell 1.2b            800x600   1024x768   1280x1024   1600x1200
6600GT 128MB AGP Reference       62        54          40          34
6600GT 128MB AGP Galaxy          69        60          45          38
6600GT 128MB AGP Galaxy OC       73        66          50          42

AquaMark 3                    800x600   1024x768   1280x1024   1600x1200
6600GT 128MB AGP Reference       58        53          46          38
6600GT 128MB AGP Galaxy          59        55          48          40
6600GT 128MB AGP Galaxy OC       62        58          51          43

6600GT 128MB AGP Galaxy OC is the card in its overclocked state. As you can see, that's a nice gain in extra performance.

One small reminder though: our overclocking results are never a guarantee of your results. Manufacturers' component choices differ, and so will the end results. This is, however, a good indication of what is possible (or not).
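If you want to put a number on those gains, a quick Python pass over the Splinter Cell figures above does the job:

    results = {                  # resolution: (Galaxy default FPS, Galaxy OC FPS)
        "800x600":   (69, 73),
        "1024x768":  (60, 66),
        "1280x1024": (45, 50),
        "1600x1200": (38, 42),
    }
    for res, (default, oc) in results.items():
        print(f"{res}: +{(oc - default) / default * 100:.0f}%")
    # roughly +6% to +11% from the overclock, growing with resolution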

The Test System
Now we begin the benchmark portion of this article, but first let me show you our test system.

Benchmark Software Suite:

  • Far Cry (Guru3D config & timedemo)
  • Splinter Cell (Guru3D custom timedemo)
  • Half-Life 2 (Guru3D custom timedemo)
  • 3DMark03
  • 3DMark05
  • AquaMark 3
  • Unreal Tournament 2004 (Guru3D custom timedemo)
  • Doom 3
  • Halo: Combat Evolved

Remark


Image quality between ATI and NVIDIA cards really is about equal, yet driver optimizations have made it very hard to do a 100% 1:1 performance comparison. ATI has trilinear optimizations enabled by default in their X800 series, so we enabled that option for the GeForce Series 6 also.

The anisotropic filtering optimizations that enable themselves in the ForceWare drivers when you select AF/AA settings have been disabled by us, unless noted otherwise, to keep the benchmarks as objective as they can be for future comparisons.

All tests were made in 32-bit per pixel color, in resolutions ranging from 800x600 pixels up to the godfather of all gaming resolutions: 1600x1200. We also ran all tests with 4x antialiasing and 8x anisotropic filtering where possible.

 

The numbers (FPS = Frames Per Second)

 

Now, what you need to observe is simple: the numbers versus the screen resolution. The higher, the better.

The numbers represent what we call FPS, meaning frames per second. A game's frames per second is a measured average over a series of tests. That test often is a timedemo: a recorded part of the game which is a 1:1 representation of actual gameplay. After forcing the same image quality settings, this timedemo is then used for all graphics cards so that the actual measurement is as objective as it can be for all graphics cards tested, which in today's article includes a GeForce 6800 GT PCI-Express.
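In other words, the reported figure is simply frames rendered divided by the time the timedemo takes; a trivial sketch with made-up numbers:

    frames_rendered = 5400        # hypothetical timedemo frame count
    demo_seconds = 90.0           # hypothetical timedemo duration
    print(f"{frames_rendered / demo_seconds:.0f} FPS average")  # 60 FPS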

If a card reaches less than 30 FPS, it is barely able to play the game. From 30 FPS up to roughly 40 FPS you'll be quite able to play the game, with perhaps a tiny stutter at certain parts that are intensive on the graphics card.

When a graphics card does 60 FPS on average or higher, you can rest assured that the game will likely play extremely smoothly at every point in the game.

You are always aiming for the highest possible FPS versus the highest resolution versus the highest image quality.
 

Frames per second    Gameplay
<30 FPS              very limited gameplay
30-40 FPS            average yet playable
40-60 FPS            good gameplay
>60 FPS              best possible gameplay
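That table maps directly onto a simple rating function; a Python sketch with the thresholds taken straight from the table:

    def gameplay_rating(avg_fps):
        """Translate an average FPS into the playability classes above."""
        if avg_fps < 30:
            return "very limited gameplay"
        if avg_fps < 40:
            return "average yet playable"
        if avg_fps < 60:
            return "good gameplay"
        return "best possible gameplay"

    print(gameplay_rating(54))  # "good gameplay"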
 

           

 

Note
Before we start with the benchmarks I need to make something very clear about the test systems used. The GeForce 6600 GT AGP has a slight disadvantage, as the two test systems used differ a tiny bit. Since this article really is about the performance difference between the PCI-Express and AGP versions, I wanted, no, needed to do a direct comparison between that line-up of products. The problem then is the test systems, as PCI-Express and AGP require two different platforms.

We took an AGP 8x Intel 865PE (Socket 478) and a PCI-Express Intel 915P (Socket 775) motherboard and configured both with equal settings, down to the MHz, on FSB/DDR and all related settings.

The big difference, however, is that the PCI-Express system has a 3.6 GHz processor and the AGP system a 3.4 GHz CPU, a 200 MHz (roughly 6%) difference. In reality both systems perform quite close to one another, yet the AGP system is at a small disadvantage that can become apparent in CPU-limited games.
